From the Orlando Sentinel comes this report about police abusing the FL DMV database. There is more about it at the Reason blog.
Government databases will always be abused. That’s the nature of man, and there is no use fighting it, which is why massive government databases should not be created in the first place unless there is no alternative.
John Fontana writes about a new idea called People-Centric Security. The idea is to loosen enterprise security policies so that security decisions are made by those directly responsible for a business area rather than by a central security team.
To paraphrase the immortal words of Pogo: We have met the security team and they is us!
For better or worse I think this actually reflects the current state rather than some new idea. For all the work security teams do, users just work around them to do what they need to do.
How many times have you heard these conversations:
- The mail server blocked your attachment. Can you send it to my gmail account?
- I can’t reach your website. Let me disconnect from the VPN and try again.
- Our machines disallow USB storage devices, but I can upload the files to Dropbox.
Your company’s security already depends on your users. The security team is just pretending it doesn’t.
This is a wonderful story about the hacking of Marconi’s wireless system in 1903. Marconi touted the security of his system based on a tight (and presumably not publicly disclosed) frequency bandwidth. Of course it was hacked in a public and humiliating fashion.
Security via obscurity, as effective in 1903 as it is today.
Hat tip to Bruce Schneier.
There are a couple of interesting articles on Stuxnet out recently. This article poses the astonishing possibility that it was a targeted attack on the Iranian Bushehr nuclear plant. The arguments given, however, are highly circumstantial.
This article also puts forth the notion that Stuxnet was likely created by some government.
Is this the first instance of SaaW, software as a weapon?
There are some interesting tidbits coming out about the Chinese hack of Google. Apparently the source code to Google’s SSO technology was a target (although this is misstated in the headline as a “password system”). It’s unknown at this point what source code (if any) was taken, but this highlights the nightmare scenario of the SSO world.
If a vulnerability is found in your token generation code such that someone can spoof a token, then your SSO system and every system connected to it is compromised.
Of course, just having the source code is not in itself a problem. Typically there is a private key that is used to encrypt or sign the token, and protecting that private key is the real issue; that is where the source code matters. If you think your key has been compromised, you can replace it. But the code that authenticates the user and generates the token needs access to the private key to do the encryption (or signing, or both). If the mechanism that code uses to access the key is exposed, an attacker can attempt to penetrate the system where the key lives and steal it. With the key and the token-generating code in hand, the attacker can then access any SSO-protected system.
And here is an ugly secret. If the SSO technology uses public-key encryption, the key only needs to exist where the token is initially generated. If it’s based on symmetric-key encryption, the key has to exist on every server in the SSO environment.
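To make the symmetric case concrete, here is a minimal, hypothetical sketch using Python’s standard-library `hmac` module. The key name and token format are invented for illustration, not any real product’s scheme. It shows why the source code alone isn’t enough to forge a token, but code plus key is total compromise:

```python
import hmac
import hashlib

# Hypothetical demo key -- in a real deployment this secret lives on the
# token-issuing server (and, with symmetric signing, on every verifying
# server as well).
SECRET_KEY = b"demo-keys-to-the-kingdom"

def issue_token(user: str) -> str:
    # Token = user data plus a MAC over it. The source code defines the
    # format; the key is what makes the token unforgeable.
    sig = hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_token(token: str) -> bool:
    user, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Knowing the format (the code) alone is not enough to forge a token...
assert not verify_token("mallory:" + "0" * 64)
# ...but code plus key lets an attacker mint a token for any user.
assert verify_token(issue_token("mallory"))
```

Every server that verifies tokens this way must hold `SECRET_KEY`, which is exactly why the symmetric case spreads the keys to the kingdom so widely.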
So just use public-key encryption and that solves the problem, right? Not so fast. One critical aspect of SSO is inactive session timeout. That requires the token to be “refreshed” when used so that it expires based on inactivity. Refreshing the token at every server in the SSO system (every PEP, if you will) requires either that each server have the key, or that it make a remote call to a common authentication service to refresh the token.
There are pluses and minuses to both approaches: one puts the keys to the kingdom in more locations, while the other adds overhead to every token refresh. When security and performance collide, who do you think usually wins?
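Here is a minimal sketch of the inactivity-based refresh, again hypothetical (stdlib `hmac`, invented names, an assumed 15-minute timeout). A PEP that holds the key can verify and re-sign the token locally, as below; without the key it would instead have to make that remote call to a central authentication service:

```python
import hmac
import hashlib

KEY = b"shared-secret"   # symmetric case: every PEP holds this key
TIMEOUT = 15 * 60        # hypothetical 15-minute inactivity window

def mint(user: str, last_seen: int) -> str:
    # Token carries a last-activity timestamp under the signature.
    body = f"{user}|{last_seen}"
    sig = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{sig}"

def refresh(token: str, now: int):
    """Verify the token at a PEP and re-issue it with an updated
    last-activity time."""
    body, _, sig = token.rpartition("|")
    good = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, good):
        return None                      # forged or corrupted token
    user, _, last_seen = body.partition("|")
    if now - int(last_seen) > TIMEOUT:
        return None                      # inactive too long: session expired
    return mint(user, now)               # refresh: re-sign with new timestamp

token = mint("alice", 1_000)
assert refresh(token, 1_000 + 60) is not None    # active session is refreshed
assert refresh(token, 1_000 + 3_600) is None     # times out after an hour idle
```

The `mint` call inside `refresh` is the crux of the trade-off: it only works because this PEP has the signing key.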
These kinds of trade-offs are what make SSO so interesting to me.
Note that I am not talking about federated SSO (SAML or OpenID) or intranet SSO (Kerberos), as they present a different set of challenges.
Steve Chapman poses the question, “would you volunteer to carry a device that lets the police monitor your location 24×7, every day?” He then lets you in on a secret: you already do. In fact, chances are you have the locator on your person at this very moment.
It’s called a cell phone.
Just think of the privacy implications here. The government can tell if you spend the night at someone else’s house, visit a red-light district, attend a political rally, drive too fast, or get a medical procedure. They can know where you are at all times, whether you are out in public or in a private residence.
Oh, and the current administration (like the last one) doesn’t think a warrant should be required for any of this.
Jonathan Sander of Quest has this to say about the coming identity apocalypse. Interesting stuff.
This got me thinking about a fascinating aspect of identity management in the ASP (and SaaS) space, and that is the delegated nature of identity. For example, my current employer CareMedic (now part of Ingenix) offers hosted services where authorization decisions are made based on the identity of the user. Since these are medical revenue cycle applications, the authorization decisions are covered by various regulations such as HIPAA.
But here is the interesting part: we don’t really need to verify that the identity we know corresponds to a specific person. We trust our customers (the health care service providers) to validate that the identities they provide us are properly vetted, and they determine the roles that those identities fulfill.
And this is the fundamental trust issue pertaining to the identity providers that Jonathan Sander discusses. The entity with the financial stake must validate the real person behind the identity.
Beware of Greeks bearing gifts, or schools issuing laptops. Of course, this situation could be addressed by a simple application of electrical tape.
You have to wonder exactly what the school was thinking would happen. How do you not get sued when you do something so monumentally dumb?
Nico Popp suggests that incidents such as the recent Google hack may lead to governments and large corporations adopting a form of Mutually Assured Destruction cyber defense.
On one hand there is a lot of sense in this, especially for governments. However, I suspect retaliation would be more of an economic (or, worst case, military) nature.
At some level that’s exactly what is going on with the Google case. Google obviously believes that the Chinese government is behind the attack, and Google has retaliated by threatening to stop censoring content in China, even at the risk of getting thrown out of the country. Of course, now they seem to be backing down, and both sides are looking for a face-saving compromise.
But one problem with the MAD theory of cyber-warfare is that you most often don’t have any idea whom to retaliate against, at least not with a sufficient degree of certainty.
So for now, MAD looks pretty unlikely in the cyber-warfare game.